Introducing an Adaptive VLR Algorithm Using Learning Automata for Multilayer Perceptron

Authors

  • Behbood MASHOUFI
  • Mohammad Bagher MENHAJ
  • Sayed A. MOTAMEDI
  • Mohammad R. MEYBODI
Abstract

One of the biggest limitations of the backpropagation (BP) algorithm is its slow rate of convergence. The Variable Learning Rate (VLR) algorithm is one of the well-known techniques for enhancing the performance of BP. Because the VLR parameters have an important influence on its performance, we use learning automata (LA) to adjust them. The proposed algorithm, named the Adaptive Variable Learning Rate (AVLR) algorithm, dynamically tunes the VLR parameters by learning automata according to the changes in the error. Simulation results on several practical problems, such as sinusoidal function approximation, nonlinear system identification, phoneme recognition, and Persian printed letter recognition, help to judge the merit of the proposed AVLR method.

Keywords: multilayer neural network, backpropagation, variable learning rate, learning automata
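
The abstract does not spell out the update rules, but the idea can be illustrated with a minimal sketch in Python: assume the common VLR heuristic (keep the weight update and increase the learning rate when the error decreases; discard the update and decrease the rate when the error grows by more than a fixed ratio) and a linear reward-inaction (L_RI) automaton that selects the increment/decrement factors. The candidate factor pairs, the 1.04 error ratio, the reward step, and the names bp_epoch and avlr_epoch are illustrative assumptions, not the settings used in the paper.

    # Illustrative sketch only: parameter values and names are assumptions,
    # not those used in the AVLR paper.
    import numpy as np

    class LinearRewardInaction:
        """L_RI learning automaton over a finite set of actions."""
        def __init__(self, n_actions, reward_step=0.1):
            self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
            self.reward_step = reward_step
            self.last = 0

        def choose(self):
            self.last = int(np.random.choice(len(self.p), p=self.p))
            return self.last

        def reward(self):
            # Reinforce the last action; on a penalty the probabilities stay unchanged.
            self.p *= 1.0 - self.reward_step
            self.p[self.last] += self.reward_step

    # Assumed grid of VLR parameter pairs (increment factor, decrement factor).
    CANDIDATES = [(1.02, 0.6), (1.05, 0.7), (1.10, 0.8)]
    automaton = LinearRewardInaction(len(CANDIDATES))

    def avlr_epoch(bp_epoch, weights, lr, prev_error, max_ratio=1.04):
        """One BP epoch with a variable learning rate whose factors are picked by the automaton.

        bp_epoch(weights, lr) is a user-supplied function that performs one
        backpropagation pass and returns (new_weights, new_error)."""
        inc, dec = CANDIDATES[automaton.choose()]
        new_weights, new_error = bp_epoch(weights, lr)
        if new_error > max_ratio * prev_error:
            # Error grew too much: discard the update and decrease the learning rate.
            return weights, lr * dec, prev_error
        if new_error < prev_error:
            # Error decreased: keep the update, increase the learning rate,
            # and reward the automaton's choice of factors.
            automaton.reward()
            return new_weights, lr * inc, new_error
        # Error changed only slightly: keep the update and the current rate.
        return new_weights, lr, new_error

In this sketch the automaton is rewarded whenever its chosen factor pair leads to a decrease in the error, so over many epochs its probability mass shifts toward the factors that work best for the problem at hand; this mirrors the role the abstract assigns to the learning automata, namely tuning the VLR parameters according to the error changes.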

Similar articles

Convergence Analysis of Adaptive Recurrent Neural Network

This paper presents an analysis of a modified Feed Forward Multilayer Perceptron (FMP) obtained by inserting an ARMA (Auto-Regressive Moving Average) model at each neuron (processor node) and training it with the backpropagation learning algorithm. A stability analysis is presented to establish the convergence theory of the backpropagation algorithm based on the Lyapunov function. Furthermore, the analysis extends the...

Stable Adaptive Momentum for Rapid Online Learning in Nonlinear Systems

We consider the problem of developing rapid, stable, and scalable stochastic gradient descent algorithms for optimisation of very large nonlinear systems. Based on earlier work by Orr et al. on adaptive momentum—an efficient yet extremely unstable stochastic gradient descent algorithm—we develop a stabilised adaptive momentum algorithm that is suitable for noisy nonlinear optimisation problems....

An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back propagation algorithm by ...

Application of ensemble learning techniques to model the atmospheric concentration of SO2

In view of pollution prediction modeling, the study adopts homogeneous (random forest, bagging, and additive regression) and heterogeneous (voting) ensemble classifiers to predict the atmospheric concentration of sulphur dioxide. For model validation, results were compared against widely known single base classifiers such as support vector machine, multilayer perceptron, linear regression and re...

Hybrid Supervised Learning in MLP using Real-coded GA and Back-propagation

This paper addresses a classification task of pattern recognition by combining the effectiveness of evolutionary and gradient descent techniques. We propose a hybrid supervised learning approach that uses a real-coded GA and back-propagation to optimize the connection weights of a multilayer perceptron. The resulting learning algorithm overcomes the problems and drawbacks of each individual technique by i...


Journal:

Volume   Issue

Pages  -

Publication date: 2003